Conditions for the equivalence between IQC and graph separation stability results
This paper provides a link between time-domain and frequency-domain stability
results in the literature. Specifically, we focus on the comparison between
stability results for a feedback interconnection of two nonlinear systems
stated in terms of frequency-domain conditions. While the Integral Quadratic
Constraint (IQC) theorem can cope with them via a homotopy argument for the
Lurye problem, graph separation results require the transformation of the
frequency-domain conditions into truncated time-domain conditions. To date,
much of the literature focuses on "hard" factorizations of the multiplier,
considering only one of the two frequency-domain conditions. Here it is shown
that a symmetric, "doubly-hard" factorization is required to convert both
frequency-domain conditions into truncated time-domain conditions. By using the
appropriate factorization, a novel comparison between the results obtained by
IQC and separation theories is then provided. As a result, we identify under
what conditions the IQC theorem may provide some advantage.
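For context (this is not part of the abstract), the two frequency-domain conditions referred to above are, in the standard Megretski–Rantzer formulation of the IQC theorem for the feedback interconnection of a stable LTI plant G and a bounded causal operator Δ with multiplier Π, typically stated as:

```latex
% IQC satisfied by the uncertainty \Delta:
\int_{-\infty}^{\infty}
\begin{bmatrix} \hat{v}(j\omega) \\ \widehat{\Delta v}(j\omega) \end{bmatrix}^{*}
\Pi(j\omega)
\begin{bmatrix} \hat{v}(j\omega) \\ \widehat{\Delta v}(j\omega) \end{bmatrix}
\, d\omega \;\ge\; 0
\qquad \forall\, v \in L_2,
%
% and the strict frequency-domain inequality on the plant G:
\begin{bmatrix} G(j\omega) \\ I \end{bmatrix}^{*}
\Pi(j\omega)
\begin{bmatrix} G(j\omega) \\ I \end{bmatrix}
\;\preceq\; -\varepsilon I
\qquad \forall\, \omega \in \mathbb{R}, \ \text{for some } \varepsilon > 0.
```

Graph separation results, by contrast, need both of these converted into truncated time-domain conditions, which is where the factorization of Π enters.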
An Overview of Integral Quadratic Constraints for Delayed Nonlinear and Parameter-Varying Systems
A general framework is presented for analyzing the stability and performance
of nonlinear and linear parameter varying (LPV) time delayed systems. First,
the input/output behavior of the time delay operator is bounded in the
frequency domain by integral quadratic constraints (IQCs). A constant delay is
a linear, time-invariant system and this leads to a simple, intuitive
interpretation for these frequency domain constraints. This simple
interpretation is used to derive new IQCs for both constant and varying delays.
Second, the performance of nonlinear and LPV delayed systems is bounded using
dissipation inequalities that incorporate IQCs. This step makes use of recent
results that show, under mild technical conditions, that an IQC has an
equivalent representation as a finite-horizon time-domain constraint. Numerical
examples are provided to demonstrate the effectiveness of the method for both
classes of systems.
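The "simple, intuitive interpretation" for a constant delay can be made concrete: the delay-difference operator S_τ = D_τ − I has frequency response e^{−jωτ} − 1, and a classical bound used to build delay IQCs is |e^{−jωτ} − 1| = 2|sin(ωτ/2)| ≤ min(τ|ω|, 2). A minimal numerical check of this bound (the function names here are illustrative, not from the paper):

```python
import numpy as np

# Frequency response magnitude of the delay-difference operator
# S_tau = D_tau - I, where D_tau delays its input by tau seconds:
# S_tau(jw) = e^{-j w tau} - 1.
def delay_difference_gain(omega, tau):
    return np.abs(np.exp(-1j * omega * tau) - 1.0)

# Classical frequency-domain envelope used to derive delay IQCs:
# |e^{-j w tau} - 1| = 2|sin(w tau / 2)| <= min(tau |w|, 2).
def classical_bound(omega, tau):
    return np.minimum(np.abs(omega) * tau, 2.0)

tau = 0.5
omega = np.linspace(-100.0, 100.0, 20001)
gain = delay_difference_gain(omega, tau)
bound = classical_bound(omega, tau)
assert np.all(gain <= bound + 1e-12)  # the bound holds on the sampled grid
```

The low-frequency branch τ|ω| captures that a short delay barely perturbs slow signals, while the constant 2 reflects that the gain of e^{−jωτ} − 1 can never exceed 2; tighter rational envelopes of this curve give the sharper IQCs.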
Simplification Methods for Sum-of-Squares Programs
A sum-of-squares is a polynomial that can be expressed as a sum of squares of
other polynomials. Determining if a sum-of-squares decomposition exists for a
given polynomial is equivalent to a linear matrix inequality feasibility
problem. The computation required to solve the feasibility problem depends on
the number of monomials used in the decomposition. The Newton polytope is a
method to prune unnecessary monomials from the decomposition. This method
requires the construction of a convex hull and this can be time consuming for
polynomials with many terms. This paper presents a new algorithm for removing
monomials based on a simple property of positive semidefinite matrices. It
returns a set of monomials that is never larger than the set returned by the
Newton polytope method and, for some polynomials, is a strictly smaller set.
Moreover, the algorithm takes significantly less computation than the convex
hull construction. This algorithm is then extended to a more general
simplification method for sum-of-squares programming.
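The "simple property of positive semidefinite matrices" is that a zero diagonal entry forces its entire row and column to be zero. In a Gram representation p = zᵀQz, if x^(2m) has zero coefficient in p and 2m cannot be written as a + b for two distinct candidate monomials a, b, then Q[m,m] = 0, so monomial m can be dropped; removals can enable further removals, hence the iteration. A rough sketch of that pruning idea (not the paper's exact algorithm), with monomials encoded as exponent tuples:

```python
from itertools import combinations_with_replacement

def prune_monomials(poly_exponents, candidates):
    """Iteratively drop Gram-basis monomials using the PSD diagonal
    property: if x^(2m) is absent from p and 2m = a + b has no
    solution with distinct candidates a != b, then Q[m,m] = 0 in any
    PSD Gram matrix, so row/column m is zero and m can be removed."""
    poly = set(poly_exponents)
    cand = set(candidates)
    changed = True
    while changed:
        changed = False
        for m in list(cand):
            double = tuple(2 * e for e in m)
            if double in poly:
                continue  # Q[m,m] may be nonzero: x^(2m) appears in p
            # Can x^(2m) also arise from an off-diagonal entry Q[a,b]?
            off_diag = any(
                tuple(ai + bi for ai, bi in zip(a, b)) == double
                for a, b in combinations_with_replacement(cand, 2)
                if a != b
            )
            if not off_diag:
                cand.remove(m)   # Q[m,m] = 0 forced, so prune m
                changed = True
    return cand

# Example: p = x^4 + x^2 has no constant term, so the monomial 1
# (exponent (0,)) is pruned; x and x^2 survive, matching x^2 + (x^2)^2.
pruned = prune_monomials({(4,), (2,)}, {(0,), (1,), (2,)})
assert pruned == {(1, ), (2, )}
```

Unlike the Newton polytope method, this needs only set lookups and pairwise sums over the candidate list, no convex hull construction.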
A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
We develop a simple routine unifying the analysis of several important
recently-developed stochastic optimization methods including SAGA, Finito, and
stochastic dual coordinate ascent (SDCA). First, we show an intrinsic
connection between stochastic optimization methods and dynamic jump systems,
and propose a general jump system model for stochastic optimization methods.
Our proposed model recovers SAGA, SDCA, Finito, and SAG as special cases. Then
we combine jump system theory with several simple quadratic inequalities to
derive sufficient conditions for convergence rate certifications of the
proposed jump system model under various assumptions (with or without
individual convexity, etc.). The derived conditions are linear matrix
inequalities (LMIs) whose sizes roughly scale with the size of the training
set. We make use of the symmetry in the stochastic optimization methods and
reduce these LMIs to some equivalent small LMIs whose sizes are at most 3 by 3.
We solve these small LMIs to provide analytical proofs of new convergence rates
for SAGA, Finito and SDCA (with or without individual convexity). We also
explain why our proposed LMI fails in analyzing SAG. We reveal a key difference
between SAG and other methods, and briefly discuss how to extend our LMI
analysis for SAG. An advantage of our approach is that the proposed analysis
can be automated for a large class of stochastic methods under various
assumptions (with or without individual convexity, etc.).
Comment: To appear in Proceedings of the Annual Conference on Learning Theory (COLT) 201
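The jump-system view can be illustrated on SAGA itself: the algorithm state is the iterate together with the table of stored component gradients, and the randomly sampled index acts as the jump (mode) variable selecting which update map is applied. A minimal sketch on a least-squares problem (the problem instance and step-size rule are illustrative, not from the paper):

```python
import numpy as np

# SAGA on f(w) = (1/n) * sum_i 0.5 * (a_i^T w - b_i)^2, viewed as a
# jump system: state = (w, gradient table), mode = sampled index i_k.
rng = np.random.default_rng(0)
n, d = 20, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
w_star = np.linalg.lstsq(A, b, rcond=None)[0]  # exact minimizer

w = np.zeros(d)
table = np.array([A[i] * (A[i] @ w - b[i]) for i in range(n)])  # stored grads
avg = table.mean(axis=0)
step = 1.0 / (3.0 * (A ** 2).sum(axis=1).max())  # ~1/(3 L_max)

for k in range(8000):
    i = rng.integers(n)                       # random jump / mode selection
    g_new = A[i] * (A[i] @ w - b[i])          # fresh component gradient
    w = w - step * (g_new - table[i] + avg)   # SAGA descent direction
    avg = avg + (g_new - table[i]) / n        # keep running average in sync
    table[i] = g_new
```

The LMI-based analysis certifies a convergence rate for exactly this kind of recursion: the quadratic inequalities bound each mode's update map, and feasibility of the (reduced, at most 3-by-3) LMI certifies geometric decay of the expected error.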